15 - MLPDES25: Diffusion effects in optimal transport and mean-field planning models [ID:57535]

Thanks to the organizers for the invitation. It's my first time in Erlangen, so I'm happy to be here.

My talk will definitely be on the PDE side, but since it is concerned with optimal transport, I should perhaps recall the links between optimal transport and machine learning. Let me just mention two directions. The first, now really celebrated in many applications and to many extents, originated from the work on computational optimal transport, in particular the Sinkhorn algorithm and the introduction of what is now called entropic optimal transport. The other, maybe less related to what I am going to discuss, is the use of the Wasserstein distance as a loss function in supervised learning. In either of those contexts there are two points that might be related to this talk: one is regularized versions of the Wasserstein distance, and the other is the possibly different geometric properties of the transport model that we choose.
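As a side note on the entropic direction just mentioned, here is a minimal Sinkhorn sketch in Python; the toy data, function name, and parameter values are mine for illustration, not from the talk:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.5, n_iter=200):
    """Entropic optimal transport between histograms mu and nu.

    C is the ground-cost matrix; eps is the entropic regularization.
    Returns the transport plan P, whose marginals approximate mu and nu.
    """
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):              # alternating diagonal scalings
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy example: move mass between two 5-bin histograms on [0, 1]
x = np.linspace(0.0, 1.0, 5)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
mu = np.array([0.5, 0.3, 0.1, 0.05, 0.05])
nu = np.array([0.05, 0.05, 0.1, 0.3, 0.5])
P = sinkhorn(mu, nu, C)
print(np.abs(P.sum(axis=1) - mu).max())  # row-marginal error, close to 0
```

Larger eps makes the iterations converge faster but blurs the plan, which is exactly the regularizing effect alluded to above.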

In this talk I will discuss dynamical optimal transport: I will not just focus on transforming one measure into another, but on the dynamic version, that is, the trajectory that realizes the optimal transport. These are models that indeed regularize Wasserstein geodesics; as you will see in a minute, they do so essentially by penalizing, through the cost, congestion effects. The drawback is that along the trajectory they mainly produce measures with densities, and so the focus of this talk is how this kind of model enhances diffusivity, at different levels, with respect to the standard Monge-Kantorovich problem. We will discuss the relation with quasi-linear elliptic equations and also finite or infinite speed of propagation.

The problems are very simple to state, because they are essentially additive perturbations of the classical Wasserstein distance problem. The setup is the following: you take any curve in the space of probability measures joining m_0 and m_1. The Wasserstein distance just consists in minimizing the velocity of the curve, which is the first term; now think of adding another criterion, where F is a superlinear convex function, a choice made precisely so that it penalizes concentrations. There are two main examples I will discuss, and the two you should keep in mind. One is the power case, a superlinear power as a cost; the other is the entropy case, where you add to the typical velocity of the curve a little bit of the entropy along the whole trajectory. It can be a relative entropy, in particular if you want to consider a non-compact state space like the whole Euclidean space, but to keep things simple I will suppose that the state space Omega is a compact manifold without boundary, so things are definitely simpler.
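In symbols, and in my own notation (the speaker's slides are not shown here), the dynamical models just described penalize the Benamou-Brenier energy additively:

```latex
\min_{(m,v)} \int_0^1\!\!\int_\Omega \tfrac12\,|v_t|^2\, m_t \,dx\,dt
           \;+\; \int_0^1\!\!\int_\Omega F(m_t)\,dx\,dt
\quad\text{subject to}\quad
\partial_t m + \operatorname{div}(m\,v) = 0,\qquad
m(0) = m_0,\; m(1) = m_1,
```

where $F$ is convex and superlinear. The two model cases are the power case $F(m) = \tfrac{1}{p}\,m^p$ with $p > 1$, and the entropy case $F(m) = \varepsilon\, m \log m$ (replaced by a relative entropy on non-compact state spaces).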

I should certainly mention where these problems come from. They originated many years ago as variants of the Benamou-Brenier energy, in particular in problems of traffic flow or fluid dynamics, essentially in congestion models; Giuseppe Buttazzo worked on that in many papers, and of course many people associated with optimal transport did too, like Benamou, Carlier, Santambrogio. I got interested in this from the community of mean-field games and mean-field control, because these models also have a natural interpretation in terms of mean-field theories. In this kind of context, in particular in mean-field games, you have a Nash equilibrium that emerges from the individual optimizations, which gives rise to a Hamilton-Jacobi equation and to the evolution of the collective behavior through the density equation.

The typical setting is this: these are mean-field models, so they somehow come from particle systems, but the particles are rational agents. The dynamical states in these models are deterministic, because you do not see any Brownian motion in the continuity equation, and you may just think that there is an objective function for any single agent. This objective function optimizes, say, the velocity, or the cost of the control, plus a term which takes into account the preferences; in particular, if this small f is the derivative of the capital F you have seen before, it is an increasing function, and this means that you would like to avoid areas that are too congested. The Nash equilibrium between individual and mass solves a system coupling the Hamilton-Jacobi equation of the single agent with the continuity equation for the density.
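For reference, the system just mentioned can be written, in the model case of a quadratic Hamiltonian and with both endpoint densities prescribed (my rendering, with $f = F'$ as the congestion cost):

```latex
\begin{cases}
-\,\partial_t u + \tfrac12\,|Du|^2 = f(m) & \text{(Hamilton-Jacobi, single agent)}\\[2pt]
\;\;\,\partial_t m - \operatorname{div}(m\,Du) = 0 & \text{(continuity equation, density)}\\[2pt]
\;\;\, m(0) = m_0,\quad m(1) = m_1 & \text{(planning conditions)}
\end{cases}
```

Since $f$ is increasing, the coupling term penalizes regions where $m$ is large, which is exactly the congestion avoidance described above.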

Presenter: Prof. Alessio Porretta

Access: Open access

Duration: 00:32:15

Recording date: 2025-04-29

Uploaded: 2025-04-30 15:41:06

Language: en-US

#MLPDES25 Machine Learning and PDEs Workshop 
Mon. – Wed. April 28 – 30, 2025
HOST: FAU MoD, Research Center for Mathematics of Data at FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg. Erlangen – Bavaria (Germany)
 
SPEAKERS 
• Paola Antonietti. Politecnico di Milano
• Alessandro Coclite. Politecnico di Bari
• Fariba Fahroo. Air Force Office of Scientific Research
• Giovanni Fantuzzi. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Borjan Geshkovski. Inria, Sorbonne Université
• Paola Goatin. Inria, Sophia-Antipolis
• Shi Jin. SJTU, Shanghai Jiao Tong University
• Alexander Keimer. Universität Rostock
• Felix J. Knutson. Air Force Office of Scientific Research
• Anne Koelewijn. FAU MoD, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Günter Leugering. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Lorenzo Liverani. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Camilla Nobili. University of Surrey
• Gianluca Orlando. Politecnico di Bari
• Michele Palladino. Università degli Studi dell’Aquila
• Gabriel Peyré. CNRS, ENS-PSL
• Alessio Porretta. Università di Roma Tor Vergata
• Francesco Regazzoni. Politecnico di Milano
• Domènec Ruiz-Balet. Université Paris Dauphine
• Daniel Tenbrinck. FAU, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Daniela Tonon. Università di Padova
• Juncheng Wei. Chinese University of Hong Kong
• Yaoyu Zhang. Shanghai Jiao Tong University
• Wei Zhu. Georgia Institute of Technology
 
SCIENTIFIC COMMITTEE 
• Giuseppe Maria Coclite. Politecnico di Bari
• Enrique Zuazua. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
 
ORGANIZING COMMITTEE 
• Darlis Bracho Tudares. FAU MoD/DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
• Nicola De Nitti. Università di Pisa
• Lorenzo Liverani. FAU DCN-AvH, Friedrich-Alexander-Universität Erlangen-Nürnberg
 
Video teaser of the #MLPDES25 Workshop: https://youtu.be/4sJPBkXYw3M
 
 
#FAU #FAUMoD #MLPDES25 #workshop #erlangen #bavaria #germany #deutschland #mathematics #research #machinelearning #neuralnetworks

Tags

Erlangen mathematics Neural Network PDE Applied Mathematics FAU MoD Partial Differential Equations Bavaria Machine Learning FAU MoD workshop FAU